202 research outputs found

    Realized Correlation Tick-by-Tick

    We propose the Heterogeneous Autoregressive (HAR) model for the estimation and prediction of realized correlations. We construct a realized correlation measure in which both the volatilities and the covariances are computed from tick-by-tick data. As with realized volatility, market microstructure effects can induce significant bias in standard realized covariance measures computed from artificially regularly spaced returns. In contrast to these standard approaches, we analyse a simple and unbiased realized covariance estimator that does not resort to the construction of a regular grid but directly and efficiently employs the raw tick-by-tick returns of the two series. Monte Carlo simulations calibrated on realistic market microstructure conditions show that this simple tick-by-tick covariance has no bias and the smallest dispersion among the covariance estimators considered in the study. In an empirical analysis of S&P 500 and US bond data we find that realized correlations show significant regime changes in reaction to financial crises. Such regimes must be taken into account to obtain reliable estimates and forecasts.
    Keywords: High frequency data, Realized Correlation, Market Microstructure, Bias correction, HAR, Regimes
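    A minimal sketch of a covariance estimator that works directly on raw tick-by-tick returns, in the spirit of the abstract above. The overlapping-interval pairing rule (Hayashi-Yoshida style) and the function signature are illustrative assumptions, not necessarily the paper's exact estimator:

```python
# Hedged sketch: a Hayashi-Yoshida-style covariance estimator that uses raw
# tick-by-tick returns directly, without aligning them on a regular grid.
# Function name and the pairing rule are illustrative assumptions.

def hy_covariance(intervals_x, rets_x, intervals_y, rets_y):
    """Sum r_x[i] * r_y[j] over all pairs whose tick intervals overlap.

    intervals_* : sorted lists of (start, end) tuples, one per return.
    rets_*      : returns observed over those intervals.
    """
    cov = 0.0
    for (sx, ex), rx in zip(intervals_x, rets_x):
        for (sy, ey), ry in zip(intervals_y, rets_y):
            if max(sx, sy) < min(ex, ey):  # intervals overlap
                cov += rx * ry
    return cov
```

    When both series share the same regular grid, the pairing reduces to the usual realized covariance, i.e. the sum of contemporaneous return products.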

    Threshold Bipower Variation and the Impact of Jumps on Volatility Forecasting

    This study reconsiders the role of jumps for volatility forecasting by showing that jumps have a positive and mostly significant impact on future volatility. This result becomes apparent once volatility is separated into its continuous and discontinuous components using estimators which are not only consistent but also scarcely plagued by small-sample bias. To this purpose, we introduce the concept of threshold bipower variation, which is based on the joint use of bipower variation and threshold estimation. We show that its generalization (threshold multipower variation) admits a feasible central limit theorem in the presence of jumps and provides less biased finite-sample estimates of the continuous quadratic variation than standard multipower variation. We further provide a new test for jump detection which has substantially more power than tests based on multipower variation. Empirical analysis (of the S&P 500 index, individual stocks and US bond yields) shows that the proposed techniques significantly improve the accuracy of volatility forecasts, especially in periods following the occurrence of a jump.
    Keywords: volatility estimation, jump detection, volatility forecasting, threshold estimation, financial markets
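    The threshold bipower variation idea can be sketched as follows: multiply adjacent absolute returns, but discard any product involving a return whose square exceeds a threshold, so that jump returns are excluded. A constant threshold is assumed here for simplicity; the paper's threshold is local and time-varying:

```python
import math

MU1 = math.sqrt(2.0 / math.pi)  # E|Z| for Z ~ N(0, 1)

def threshold_bipower_variation(returns, threshold):
    """Bipower variation in which both returns entering each product must
    have squared value at or below `threshold`; large returns, attributed
    to jumps, are excluded. A constant threshold is a simplification of
    the paper's time-varying, variance-proportional threshold.
    """
    tbpv = 0.0
    for r_prev, r_curr in zip(returns[:-1], returns[1:]):
        if r_prev ** 2 <= threshold and r_curr ** 2 <= threshold:
            tbpv += abs(r_prev) * abs(r_curr)
    return tbpv / MU1 ** 2  # scale so it estimates integrated variance
```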

    Volatility forecasting: the jumps do matter

    This study reconsiders the role of jumps for volatility forecasting by showing that jumps have a positive and mostly significant impact on future volatility. This result becomes apparent once volatility is correctly separated into its continuous and discontinuous components. To this purpose, we introduce the concept of threshold multipower variation (TMPV), which is based on the joint use of bipower variation and threshold estimation. With respect to alternative methods, our TMPV estimator provides less biased and more robust estimates of the continuous quadratic variation and jumps. This technique also provides a new test for jump detection which has substantially more power than traditional tests. We use this separation to forecast volatility by employing a heterogeneous autoregressive (HAR) model, which is suitable for parsimoniously modeling long memory in realized volatility time series. Empirical analysis shows that the proposed techniques significantly improve the accuracy of volatility forecasts for the S&P 500 index, single stocks and US bond yields, especially in periods following the occurrence of a jump.
    Keywords: volatility forecasting, jumps, bipower variation, threshold estimation, stock, bond
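    The HAR regression mentioned above explains tomorrow's realized volatility with daily, weekly and monthly averages of past realized volatility. A sketch of the regressor construction, with the conventional 1/5/22-day windows (the paper's exact specification may differ):

```python
# Hedged sketch of HAR-RV regressor construction: regress rv[t+1] on the
# daily value and the 5-day and 22-day averages of past realized volatility.
# Window lengths follow the usual HAR convention.

def har_regressors(rv, week=5, month=22):
    """Return (y, X): targets rv[t+1] and rows (1, rv_d, rv_w, rv_m) for OLS."""
    targets, rows = [], []
    for t in range(month - 1, len(rv) - 1):
        rv_d = rv[t]                                    # daily component
        rv_w = sum(rv[t - week + 1 : t + 1]) / week     # weekly average
        rv_m = sum(rv[t - month + 1 : t + 1]) / month   # monthly average
        rows.append((1.0, rv_d, rv_w, rv_m))            # intercept + 3 regressors
        targets.append(rv[t + 1])
    return targets, rows
```

    Feeding (y, X) to any OLS routine gives the HAR coefficients; the long-memory-like persistence comes from mixing the three horizons rather than from fractional integration.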

    Bond Risk Premia Forecasting: A Simple Approach for Extracting Macroeconomic Information from a Panel of Indicators

    We propose a simple but effective estimation procedure to extract the level and volatility dynamics of a latent macroeconomic factor from a panel of observable indicators. Our approach is based on a multivariate conditionally heteroskedastic exact factor model that can account for the heteroskedasticity exhibited by most macroeconomic variables, and it relies on an iterated Kalman filter procedure. In simulations we show the unbiasedness of the proposed estimator and its superiority to alternative approaches introduced in the literature. The simulation results are confirmed in applications to real inflation data with the goal of forecasting long-term bond risk premia. Moreover, we find that the extracted level and conditional variance of the latent inflation factor are strongly related to NBER business cycles.
    Keywords: Macroeconomic variables; Exact factor model; Kalman filter; Heteroskedasticity; Forecasting bond risk premia; Inflation measures; Business cycles
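    The iterated Kalman filter at the core of the procedure builds on the standard linear filtering recursions. A one-dimensional local-level sketch with known noise variances (the paper's version is multivariate and heteroskedastic, and iterates to estimate these variances):

```python
def local_level_kalman(y, q, r, a0=0.0, p0=1e6):
    """Kalman filter for the scalar local-level model
    a_t = a_{t-1} + eta_t (var q),  y_t = a_t + eps_t (var r).
    Returns the filtered means; q and r are treated as known here.
    """
    a, p = a0, p0
    filtered = []
    for obs in y:
        p = p + q              # predict: state variance grows by q
        k = p / (p + r)        # Kalman gain
        a = a + k * (obs - a)  # update mean with the new observation
        p = (1.0 - k) * p      # update variance
        filtered.append(a)
    return filtered
```

    With a diffuse prior (large p0) the first filtered value essentially equals the first observation; thereafter the gain balances signal variance q against noise variance r.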

    Volatility Forecasting: The Jumps Do Matter

    This study reconsiders the role of jumps for volatility forecasting by showing that jumps have a positive and mostly significant impact on future volatility. This result becomes apparent once volatility is correctly separated into its continuous and discontinuous components. To this purpose, we introduce the concept of threshold multipower variation (TMPV), which is based on the joint use of bipower variation and threshold estimation. With respect to alternative methods, our TMPV estimator provides less biased and more robust estimates of the continuous quadratic variation and jumps. This technique also provides a new test for jump detection which has substantially more power than traditional tests. We use this separation to forecast volatility by employing a heterogeneous autoregressive (HAR) model, which is suitable for parsimoniously modeling long memory in realized volatility time series. Empirical analysis shows that the proposed techniques significantly improve the accuracy of volatility forecasts for the S&P 500 index, single stocks and US bond yields, especially in periods following the occurrence of a jump.
    Keywords: volatility forecasting, jumps, bipower variation, threshold estimation, stock, bond

    Filtering and Smoothing with Score-Driven Models

    We propose a methodology for filtering, smoothing and assessing parameter and filtering uncertainty in misspecified score-driven models. Our technique is based on a general representation of the well-known Kalman filter and smoother recursions for linear Gaussian models in terms of the score of the conditional log-likelihood. We prove that, when data are generated by a nonlinear non-Gaussian state-space model, the proposed methodology results from a first-order expansion of the true observation density around the optimal filter. The error made by such an approximation is assessed analytically. As shown in extensive Monte Carlo analyses, our methodology performs very similarly to exact simulation-based methods while remaining computationally extremely simple. We illustrate empirically the advantages of employing score-driven models as misspecified filters rather than purely predictive processes.
    Comment: 33 pages, 5 figures, 6 tables
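    A score-driven (GAS) filter updates its time-varying parameter with the scaled score of the conditional log-likelihood. A minimal univariate sketch for a time-varying variance, where the scaled Gaussian score reduces to y_t^2 - f_t (this special case coincides with GARCH(1,1); the paper treats the general framework):

```python
def gas_variance_filter(y, omega, alpha, beta, f0):
    """Score-driven recursion for a time-varying variance f_t with
    y_t ~ N(0, f_t):  f_{t+1} = omega + beta * f_t + alpha * s_t,
    where s_t = y_t^2 - f_t is the scaled score of the Gaussian
    log-likelihood (score (y^2 - f) / (2 f^2) times scaling 2 f^2).
    Returns the filtered variance path, one value per observation.
    """
    f, path = f0, []
    for obs in y:
        path.append(f)
        f = omega + beta * f + alpha * (obs * obs - f)
    return path
```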

    A Score-Driven Conditional Correlation Model for Noisy and Asynchronous Data: an Application to High-Frequency Covariance Dynamics

    The analysis of the intraday dynamics of correlations among high-frequency returns is challenging due to the presence of asynchronous trading and market microstructure noise. Both effects may lead to significant data reduction and may severely underestimate correlations if traditional methods for low-frequency data are employed. We propose to model intraday log-prices through a multivariate local-level model with score-driven covariance matrices and to treat asynchronicity as a missing-value problem. The main advantages of this approach are: (i) all available data are used when filtering correlations, (ii) market microstructure noise is taken into account, (iii) estimation is performed through standard maximum likelihood methods. Our empirical analysis, performed on 1-second NYSE data, shows that opening hours are dominated by idiosyncratic risk and that a market factor progressively emerges in the second part of the day. The method can be used as a nowcasting tool for high-frequency data, allowing one to study the real-time response of covariances to macro-news announcements and to build intraday portfolios with very short optimization horizons.
    Comment: 30 pages, 10 figures, 7 tables
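    Treating asynchronicity as a missing-value problem means the filter's prediction step always runs while the measurement update is skipped when an asset does not trade. An illustrative univariate version (the paper's model is multivariate with score-driven covariances):

```python
def kalman_with_missing(y, q, r, a0=0.0, p0=1.0):
    """Local-level Kalman filter where None marks a missing tick:
    the time update always runs, the measurement update is skipped
    when the asset did not trade, so all available data are used.
    Illustrative univariate version of the missing-value treatment.
    """
    a, p = a0, p0
    out = []
    for obs in y:
        p = p + q                  # time update (always)
        if obs is not None:        # measurement update only if observed
            k = p / (p + r)
            a = a + k * (obs - a)
            p = (1.0 - k) * p
        out.append(a)
    return out
```

    Between trades the filtered level is simply carried forward while its uncertainty grows, which is exactly why no observation has to be discarded or artificially aligned.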

    Homogeneous Volatility Bridge Estimators

    We present a theory of homogeneous volatility bridge estimators for log-price stochastic processes. The main tool of our theory is the parsimonious encoding of the information contained in the open, high and low prices of the incomplete bridge corresponding to a given log-price stochastic process, and in its close value, for a given time interval. The efficiency of the newly proposed estimators compares favorably with that of the Garman-Klass and Parkinson estimators.
    Comment: 25 pages, 9 figures
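    For reference, the two benchmark range-based estimators mentioned above have simple closed forms (per-interval variance estimates, assuming zero drift):

```python
import math

def parkinson(high, low):
    """Parkinson variance estimate: (ln(H/L))^2 / (4 ln 2)."""
    return math.log(high / low) ** 2 / (4.0 * math.log(2.0))

def garman_klass(open_, high, low, close):
    """Garman-Klass variance estimate:
    0.5 * (ln(H/L))^2 - (2 ln 2 - 1) * (ln(C/O))^2."""
    hl = math.log(high / low)
    co = math.log(close / open_)
    return 0.5 * hl ** 2 - (2.0 * math.log(2.0) - 1.0) * co ** 2
```

    The Parkinson estimator uses only the high-low range; Garman-Klass adds the open-to-close term, reducing the estimator's variance.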

    The volatility of realized volatility

    Using unobservable conditional variance as a measure, latent-variable approaches, such as GARCH and stochastic-volatility models, have traditionally dominated the empirical finance literature. In recent years, with the availability of high-frequency financial market data, modeling realized volatility has become a new and innovative research direction. By constructing "observable" or realized volatility series from intraday transaction data, the use of standard time series models, such as ARFIMA models, has become a promising strategy for modeling and predicting (daily) volatility. In this paper, we show that the residuals of the commonly used time-series models for realized volatility exhibit non-Gaussianity and volatility clustering. We propose extensions to explicitly account for these properties and assess their relevance when modeling and forecasting realized volatility. In an empirical application to S&P 500 index futures we show that allowing for time-varying volatility of realized volatility leads to a substantial improvement in the model's fit as well as its predictive performance. Furthermore, the distributional assumption for the residuals plays a crucial role in density forecasting.
    JEL Classification: C22, C51, C52, C5
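    Volatility clustering in the residuals can be checked with a quick diagnostic such as the lag-1 autocorrelation of squared residuals; a clearly positive value motivates the time-varying-volatility extensions discussed above (the helper below is illustrative, not from the paper):

```python
def acf1(x):
    """Lag-1 sample autocorrelation. Applied to squared model residuals,
    a clearly positive value indicates volatility clustering, i.e. the
    residual variance itself is time-varying."""
    n = len(x)
    m = sum(x) / n
    num = sum((x[i] - m) * (x[i - 1] - m) for i in range(1, n))
    den = sum((v - m) ** 2 for v in x)
    return num / den
```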

    The Volatility of Realized Volatility

    Using unobservable conditional variance as a measure, latent-variable approaches, such as GARCH and stochastic-volatility models, have traditionally dominated the empirical finance literature. In recent years, with the availability of high-frequency financial market data, modeling realized volatility has become a new and innovative research direction. By constructing "observable" or realized volatility series from intraday transaction data, the use of standard time series models, such as ARFIMA models, has become a promising strategy for modeling and predicting (daily) volatility. In this paper, we show that the residuals of the commonly used time-series models for realized volatility exhibit non-Gaussianity and volatility clustering. We propose extensions to explicitly account for these properties and assess their relevance when modeling and forecasting realized volatility. In an empirical application to S&P 500 index futures we show that allowing for time-varying volatility of realized volatility leads to a substantial improvement in the model's fit as well as its predictive performance. Furthermore, the distributional assumption for the residuals plays a crucial role in density forecasting.
    Keywords: Finance, Realized Volatility, Realized Quarticity, GARCH, Normal Inverse Gaussian Distribution, Density Forecasting